Sep 22, 2013
by Seiji Miyoshi; Masato Okada

We have analyzed the generalization performance of a student that slowly switches among ensemble teachers. By calculating the generalization error analytically using statistical mechanics in the framework of on-line learning, we show that the dynamical behavior of the generalization error is periodic, synchronized with the switching period, and depends on the number of ensemble teachers. Furthermore, we show that the smaller the switching period is, the larger the...

Source: http://arxiv.org/abs/0805.0425v3
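The setup above can be illustrated with a toy simulation (not the paper's analytic statistical-mechanics calculation): a linear-perceptron student is trained by on-line gradient descent while its supervisor is switched periodically among a hypothetical ensemble of random teachers. The dimensions, learning rate, and switching period below are illustrative assumptions; the error to the current teacher jumps at each switch and decays within each period, showing the periodicity the abstract describes.

```python
import random

random.seed(0)

N = 50      # input dimension (illustrative)
M = 3       # number of ensemble teachers
T = 200     # switching period in examples
ETA = 0.3   # learning rate

# hypothetical ensemble teachers: independent random weight vectors
teachers = [[random.gauss(0, 1) for _ in range(N)] for _ in range(M)]
w = [0.0] * N  # student weights

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

errors = []
for step in range(2 * M * T):
    m = (step // T) % M                       # teacher in charge this period
    x = [random.gauss(0, 1) for _ in range(N)]
    delta = dot(teachers[m], x) - dot(w, x)   # current teacher's output minus student's
    for i in range(N):
        w[i] += (ETA / N) * delta * x[i]      # on-line gradient step
    # squared weight distance to the current teacher, a proxy for the error
    errors.append(sum((w[i] - teachers[m][i]) ** 2 for i in range(N)))

print(f"error after first step: {errors[0]:.1f}, after last step: {errors[-1]:.1f}")
```

Printing `errors` around multiples of `T` shows the jump at every switch followed by decay within the period.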

Sep 20, 2013
by Seiji Miyoshi; Masato Okada

We analyze the generalization performance of a student in a model composed of linear perceptrons: a true teacher, ensemble teachers, and the student. By calculating the generalization error of the student analytically using statistical mechanics in the framework of on-line learning, we prove that when the learning rate satisfies $\eta < 1$, the larger the number and the variety of the ensemble teachers are, the smaller the generalization error is; when $\eta > 1$, the properties are completely reversed. If the variety of the ensemble teachers is rich enough, the direction cosine between the true teacher and the student becomes unity...

Source: http://arxiv.org/abs/physics/0601162v1
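A minimal numerical sketch of this model, under the assumption that the ensemble teachers are noisy copies of the true teacher: a linear student learns with a rate $\eta < 1$ from a randomly chosen ensemble teacher at each step, and we measure the direction cosine between the student and the true teacher empirically. All sizes and the perturbation scale are illustrative, and the sketch demonstrates only the $\eta < 1$ regime, not the analytic phase boundary.

```python
import math
import random

random.seed(1)

N = 100     # input dimension (illustrative)
K = 5       # number of ensemble teachers
ETA = 0.5   # learning rate, chosen < 1

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

B = [random.gauss(0, 1) for _ in range(N)]  # true teacher
# hypothetical ensemble teachers: the true teacher plus independent perturbations
teachers = [[b + 0.5 * random.gauss(0, 1) for b in B] for _ in range(K)]

w = [0.0] * N  # student
for _ in range(5000):
    m = random.randrange(K)                   # pick an ensemble teacher at random
    x = [random.gauss(0, 1) for _ in range(N)]
    delta = dot(teachers[m], x) - dot(w, x)
    for i in range(N):
        w[i] += (ETA / N) * delta * x[i]

# direction cosine between the true teacher and the student
cosine = dot(w, B) / math.sqrt(dot(w, w) * dot(B, B))
print(f"direction cosine: {cosine:.3f}")
```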

Jul 20, 2013
by Seiji Miyoshi; Masato Okada

Conventional ensemble learning combines students in the space domain. In this paper, by contrast, we combine students in the time domain and call this time-domain ensemble learning. We analyze, compare, and discuss the generalization performances of time-domain ensemble learning for both a linear model and a nonlinear model. Analyzing in the framework of online learning using a statistical-mechanical method, we show qualitatively different behaviors between the two models. In a linear...

Source: http://arxiv.org/abs/cond-mat/0609568v1
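The idea of combining students in the time domain can be sketched for the linear case as follows (a toy illustration, not the paper's derivation): with output noise on the teacher, a single on-line student keeps fluctuating around the teacher, so averaging snapshots of the same student taken at different times reduces the fluctuation. Snapshot schedule, noise level, and all sizes are illustrative assumptions.

```python
import random

random.seed(2)

N = 50       # input dimension
ETA = 0.5    # learning rate
NOISE = 1.0  # std of the output noise on the teacher (illustrative)

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

B = [random.gauss(0, 1) for _ in range(N)]  # teacher weights
w = [0.0] * N                                # student

def err2(v):
    # squared weight distance to the teacher, a proxy for the generalization error
    return sum((v[i] - B[i]) ** 2 for i in range(N))

snapshots = []
for step in range(4000):
    x = [random.gauss(0, 1) for _ in range(N)]
    target = dot(B, x) + NOISE * random.gauss(0, 1)  # noisy teacher output
    delta = target - dot(w, x)
    for i in range(N):
        w[i] += (ETA / N) * delta * x[i]
    if step >= 2000 and step % 100 == 0:
        snapshots.append(list(w))   # the same student at different times

# time-domain ensemble: average the snapshots taken along the trajectory
avg = [sum(s[i] for s in snapshots) / len(snapshots) for i in range(N)]
print(f"single-student error: {err2(w):.3f}")
print(f"time-ensemble error:  {err2(avg):.3f}")
```

The averaged weights land much closer to the teacher than any single late-time snapshot, which is the qualitative effect the linear analysis predicts.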

Sep 18, 2013
by Seiji Miyoshi; Masato Okada

In the framework of on-line learning, a learning machine might move around a teacher due to differences in structure or output functions between the teacher and the learning machine, or due to noise. We have analyzed the generalization performance of a new student supervised by such a moving machine. A model composed of a true teacher, a moving teacher, and a student, all linear perceptrons with noise, has been treated analytically using statistical mechanics. It has been proven that...

Source: http://arxiv.org/abs/physics/0509050v1
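The three-level chain in this model can be sketched numerically in a noise-free simplification (the paper's model includes noise): a moving teacher learns from the fixed true teacher, while the student sees only the moving teacher. Learning rates and dimensions below are illustrative assumptions.

```python
import math
import random

random.seed(6)

N = 80                          # input dimension (illustrative)
ETA_MOV, ETA_STU = 0.1, 0.3     # learning rates of moving teacher and student

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

true_w = [random.gauss(0, 1) for _ in range(N)]  # fixed true teacher
mov_w = [0.0] * N                                # moving teacher
stu_w = [0.0] * N                                # student

for _ in range(4000):
    x = [random.gauss(0, 1) for _ in range(N)]
    d_mov = dot(true_w, x) - dot(mov_w, x)  # moving teacher learns from the true teacher
    d_stu = dot(mov_w, x) - dot(stu_w, x)   # student sees only the moving teacher
    for i in range(N):
        mov_w[i] += (ETA_MOV / N) * d_mov * x[i]
        stu_w[i] += (ETA_STU / N) * d_stu * x[i]

cosine = dot(stu_w, true_w) / math.sqrt(dot(stu_w, stu_w) * dot(true_w, true_w))
print(f"direction cosine, true teacher vs student: {cosine:.3f}")
```

Without noise the student ends up aligned with the true teacher despite never observing it directly; the interesting regimes analyzed in the paper arise when noise keeps the moving teacher circling.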

Sep 18, 2013
by Seiji Miyoshi; Masato Okada

It is known that synaptic pruning increases the storage capacity per synapse in a correlation-type associative memory model. However, the storage capacity of the entire network then decreases. To overcome this difficulty, we propose decreasing the connecting rate while keeping the total number of synapses constant by introducing delayed synapses. In this paper, a discrete synchronous-type model with both delayed synapses and their pruning is discussed as a concrete example of the...

Source: http://arxiv.org/abs/cond-mat/0305517v1
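A minimal sketch of the pruned correlation-type model referred to above (the delayed-synapse construction that is the paper's actual contribution is not reproduced here): patterns are stored by Hebbian correlation learning, a random fraction of synapses is pruned, and a stored pattern is still recalled from a corrupted cue at low loading. Network size, load, and connecting rate are illustrative.

```python
import random

random.seed(3)

N = 200   # neurons
P = 10    # stored patterns
C = 0.5   # connecting rate after random pruning (illustrative)

patterns = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(P)]

# correlation (Hebbian) learning with random synaptic pruning
J = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        if i != j and random.random() < C:   # keep a fraction C of the synapses
            J[i][j] = sum(p[i] * p[j] for p in patterns) / (C * N)

def sign(v):
    return 1 if v >= 0 else -1

# recall pattern 0 from a corrupted cue (10% of bits flipped)
s = [(-b if random.random() < 0.1 else b) for b in patterns[0]]
for _ in range(5):   # discrete synchronous updates
    s = [sign(sum(J[i][j] * s[j] for j in range(N))) for i in range(N)]

overlap = sum(a * b for a, b in zip(s, patterns[0])) / N
print(f"overlap with the stored pattern: {overlap:.2f}")
```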

Sep 19, 2013
by Seiji Miyoshi; Hiro-Fumi Yanai; Masato Okada

The synapses of real neural systems seem to have delays; it is therefore worthwhile to analyze associative memory models with delayed synapses. Here, a sequential associative memory model with delayed synapses is discussed, where a discrete synchronous updating rule and a correlation learning rule are employed. Its dynamic properties are analyzed by statistical neurodynamics. In this paper, we first re-derive the Yanai-Kim theory, which involves macrodynamical equations for the dynamics of...

Source: http://arxiv.org/abs/cond-mat/0209258v2
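A bare-bones sequential associative memory can be sketched as follows (without the delayed synapses the paper analyzes): correlation learning stores a cyclic sequence by mapping each pattern to its successor, and synchronous updates then walk the network along the sequence. Sizes and sequence length are illustrative.

```python
import random

random.seed(7)

N, P = 150, 5   # neurons and sequence length (illustrative)
pats = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(P)]

# correlation learning for a cyclic sequence: pattern mu is mapped to mu+1
J = [[sum(pats[(m + 1) % P][i] * pats[m][j] for m in range(P)) / N
      for j in range(N)] for i in range(N)]

def sign(v):
    return 1 if v >= 0 else -1

s = pats[0][:]                 # start the recall from the first pattern
overlaps = []
for step in range(1, P + 1):   # synchronous updates walk along the sequence
    s = [sign(sum(J[i][j] * s[j] for j in range(N))) for i in range(N)]
    m = sum(a * b for a, b in zip(s, pats[step % P])) / N
    overlaps.append(m)
    print(f"step {step}: overlap with pattern {step % P} = {m:.2f}")
```

At this low loading every update lands on the next stored pattern with overlap near one; the macroscopic overlap `m` tracked here is the same order parameter the Yanai-Kim macrodynamical equations describe.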

Sep 18, 2013
by Seiji Miyoshi; Tatsuya Uezu; Masato Okada

Conventional ensemble learning combines students in the space domain. In this paper, on the other hand, we combine students in the time domain and call this time-domain ensemble learning. We analyze the generalization performance of time-domain ensemble learning in the framework of online learning using a statistical-mechanical method. We treat a model in which both the teacher and the student are linear perceptrons with noise. Time-domain ensemble learning is twice as effective as...

Source: http://arxiv.org/abs/cond-mat/0605176v1

Sep 18, 2013
by Hideto Utsumi; Seiji Miyoshi; Masato Okada

We analyze the generalization performance of a student in a model composed of nonlinear perceptrons: a true teacher, ensemble teachers, and the student. We calculate the generalization error of the student analytically or numerically using statistical mechanics in the framework of on-line learning. We treat two well-known learning rules: Hebbian learning and perceptron learning. As a result, it is proven that the nonlinear model shows qualitatively different behaviors from the linear model....

Source: http://arxiv.org/abs/0705.2318v1
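The two learning rules named in the abstract can be sketched for a single student and a sign-perceptron teacher (a toy simulation, not the paper's ensemble-teacher analysis): Hebbian learning updates on every example, while perceptron learning updates only on mistakes. We report the teacher-student overlap $R$ for each rule; all sizes are illustrative.

```python
import math
import random

random.seed(4)

N, STEPS = 100, 3000   # illustrative sizes

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def sign(v):
    return 1 if v >= 0 else -1

B = [random.gauss(0, 1) for _ in range(N)]  # true teacher

def train(rule):
    w = [0.0] * N
    for _ in range(STEPS):
        x = [random.gauss(0, 1) for _ in range(N)]
        t = sign(dot(B, x))                     # teacher's binary label
        # Hebbian learning updates on every example;
        # perceptron learning updates only on mistakes
        if rule == "hebbian" or sign(dot(w, x)) != t:
            for i in range(N):
                w[i] += t * x[i] / N
    return dot(w, B) / math.sqrt(dot(w, w) * dot(B, B))

R = {rule: train(rule) for rule in ("hebbian", "perceptron")}
for rule, r in R.items():
    print(f"{rule}: teacher-student overlap R = {r:.3f}")
```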

Sep 22, 2013
by Seiji Miyoshi; Kazuyuki Hara; Masato Okada

Ensemble learning of $K$ nonlinear perceptrons, which determine their outputs by sign functions, is discussed within the framework of online learning and statistical mechanics. One purpose of statistical learning theory is to theoretically obtain the generalization error. This paper shows that ensemble generalization error can be calculated by using two order parameters, that is, the similarity between a teacher and a student, and the similarity among students. The differential equations that...

Source: http://arxiv.org/abs/cond-mat/0403632v3
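The two order parameters the abstract names, the teacher-student similarity $R$ and the student-student similarity $q$, can be measured directly in a toy simulation. Below, $K$ sign perceptrons are trained on independent examples (Hebbian updates are one simple choice of rule, an assumption here), and the majority-vote ensemble error is estimated on fresh examples; for a single sign perceptron the generalization error is $\epsilon = \arccos(R)/\pi$. All sizes are illustrative.

```python
import math
import random

random.seed(5)

N, K, STEPS, TRIALS = 100, 3, 1000, 2000   # illustrative sizes

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def sign(v):
    return 1 if v >= 0 else -1

def overlap(a, b):
    return dot(a, b) / math.sqrt(dot(a, a) * dot(b, b))

B = [random.gauss(0, 1) for _ in range(N)]  # teacher

students = []
for _ in range(K):                  # K students, trained on independent examples
    w = [0.0] * N
    for _ in range(STEPS):
        x = [random.gauss(0, 1) for _ in range(N)]
        t = sign(dot(B, x))
        for i in range(N):
            w[i] += t * x[i] / N    # Hebbian update (one simple choice of rule)
    students.append(w)

R = overlap(students[0], B)            # similarity between teacher and a student
q = overlap(students[0], students[1])  # similarity among students
single_err = math.acos(R) / math.pi    # generalization error of one sign perceptron

# generalization error of the majority-vote ensemble, estimated on fresh examples
wrong = 0
for _ in range(TRIALS):
    x = [random.gauss(0, 1) for _ in range(N)]
    vote = sum(sign(dot(w, x)) for w in students)
    wrong += sign(vote) != sign(dot(B, x))
ens_err = wrong / TRIALS

print(f"R = {R:.3f}, q = {q:.3f}, single eps = {single_err:.3f}, ensemble eps = {ens_err:.3f}")
```

Because the students are trained on the same teacher, $q$ stays well below one, which is exactly what leaves room for the ensemble to beat a single student.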

Sep 20, 2013
by Masahiro Urakami; Seiji Miyoshi; Masato Okada

In the framework of on-line learning, a learning machine might move around a teacher due to the differences in structures or output functions between the teacher and the learning machine. In this paper we analyze the generalization performance of a new student supervised by a moving machine. A model composed of a fixed true teacher, a moving teacher, and a student is treated theoretically using statistical mechanics, where the true teacher is a nonmonotonic perceptron and the others are simple...

Source: http://arxiv.org/abs/cs/0612117v1